In [1]:
using Pnums

Uncontrolled Cancellation

Cancellation can cause an arbitrary increase in the relative width of a bound in a single step.

Start with the single ulp-wide Pnum above 0:


In [10]:
eps = nextpnum(pn8"0")


Out[10]:
pn8"(0, /224)"

Exponentiation produces the single ulp-wide Pbound above 1:


In [6]:
x1 = exp(eps)


Out[6]:
pb8"(1, 5/4)"

In [7]:
length(eachpnum(x1))


Out[7]:
1

Subtracting 1 leaves the absolute width the same (1/4), but greatly increases the relative width.


In [8]:
x2 = exp(eps) - 1


Out[8]:
pb8"(0, /4)"

In [9]:
length(eachpnum(x2))


Out[9]:
47

In this case, we went from 1 ulp wide to 47 ulps wide in a single subtraction.

Scaling the result can make the absolute width arbitrarily large:


In [12]:
x3 = 16*(exp(eps) - 1)


Out[12]:
pb8"(0, 4)"

In [13]:
length(eachpnum(x3))


Out[13]:
79

Cancellation is a separate problem from the "dependency problem," and can't be addressed by splitting and reassembling sets. It can be addressed with fused operations, but this requires careful analysis of each problem and a well-developed set of fused operations.

Julia has a built-in expm1 function for computing exp(x) - 1 without cancellation, and the same function could be implemented for Pnums.


In [14]:
exp(1e-32)-1


Out[14]:
0.0

In [15]:
expm1(1e-32)


Out[15]:
1.0e-32
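
A Pnum-aware expm1 could avoid the cancellation by summing the Taylor series of exp(x) - 1 directly. As a rough sketch (not part of Pnums; `pexpm1` is a hypothetical name, and a truncated series is not a rigorous enclosure without an explicit remainder bound), generic Julia code like the following works for any type supporting `+`, `*`, and `/`:

```julia
# Hypothetical sketch: sum x + x^2/2! + ... + x^n/n! directly,
# never forming the cancellation-prone difference exp(x) - 1.
# NOTE: truncating the series does NOT give a guaranteed enclosure
# for interval/set types; a real implementation would also need to
# bound the series remainder.
function pexpm1(x, nterms=8)
    term = x          # current series term, x^n / n!
    acc  = x          # running partial sum
    for n in 2:nterms
        term = term * x / n
        acc = acc + term
    end
    acc
end
```

For ordinary floats this already tracks expm1 closely for small x; for example, pexpm1(1e-32) returns 1.0e-32 rather than the 0.0 produced by exp(1e-32) - 1.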

Dependency problem within a single ulp

The "dependency problem" is a well-known problem that causes interval arithmetic to produce overly pessimistic bounds when a single interval-valued variable is used more than once in an expression. By default, interval arithmetic "forgets about correlations" between pieces of an expression.

A simple example that illustrates the problem is subtracting a number from itself.


In [17]:
f(x) = x - x


Out[17]:
f (generic function with 1 method)

This function is 0 for any single real number, but given an interval, it produces a wider interval.


In [20]:
f(pb8"[1, 2]")


Out[20]:
pb8"[-1, 1]"

You can reduce the effect of this problem by splitting interval-valued or set-valued variables into smaller subsets, running the computation on each subset, and reassembling the pieces at the end.

The Unums 2.0 proposal suggests automatically splitting variables that might suffer from the dependency problem into ulp-wide pieces. Pnums doesn't yet implement this strategy, but you can simulate it manually.


In [22]:
let out = pb8"empty"
    for x in eachpnum(pb8"[1,2]")
        # shortestcover is a stand-in for set-union:
        # it gives the shortest Pbound that covers its inputs
        out = shortestcover(out, f(x))
    end
    out
end


Out[22]:
pb8"(-1/4, /4)"

Notice that this has reduced (but not completely eliminated) the effect of the dependency problem.

This strategy can be implemented as a one-liner with mapreduce:


In [23]:
mapreduce(f, shortestcover, eachpnum(pb8"[1, 2]"))


Out[23]:
pb8"(-1/4, /4)"

In some cases, the dependency problem can cause catastrophic growth even for ulp-wide inputs. A simple example is x/x close to 0.


In [24]:
g(x) = x/x


Out[24]:
g (generic function with 1 method)

In [25]:
g(nextpnum(pn8"0"))


Out[25]:
pb8"(0, /0)"

In this case, a single ulp-wide input grows to cover all positive numbers in a single step.

Gustafson has claimed that the automatic splitting strategy means that "uncertainty grows linearly in general". However, the example of x/x shows that in some cases, the dependency problem can still cause arbitrarily large growth in uncertainty in a single step, even with the automatic splitting strategy.
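
The splitting strategy can be applied to g to see this directly. As a sketch (reusing shortestcover and eachpnum from above; the interval literal pb8"(0, 1)" is assumed to parse the same way as the other Pbound literals in this notebook), reassembling g over ulp-wide pieces of an interval that approaches 0 still yields a bound covering all positive numbers, because the single piece adjacent to 0 already does:

```julia
# Hypothetical: split into ulp-wide pieces, apply g(x) = x/x, reassemble.
# The piece adjacent to 0 alone maps to (0, /0) (all positive numbers),
# so the reassembled cover is still unboundedly wide despite splitting.
mapreduce(g, shortestcover, eachpnum(pb8"(0, 1)"))
```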

Even so, interval or set splitting can obviously help in some cases (as in the x-x example). I would like to have a better understanding of when its benefits justify its costs.
